    A Deep Learning Approach to Urban Street Functionality Prediction Based on Centrality Measures and Stacked Denoising Autoencoder

    In urban planning and transportation management, the centrality characteristics of urban streets are vital measures to consider. Centrality helps in understanding the structural properties of dense traffic networks that affect both human life and activity in cities. Many cities classify urban streets to provide stakeholders with a set of street guidelines for possible rehabilitation work such as sidewalks, curbs, and setbacks. Transportation research treats street networks as the connections between different urban areas. Street functionality classification defines the role of each element of the urban street network (USN). Potential factors such as land-use mix, accessible services, design goals, and administrative policies can affect the movement patterns of urban travelers. In this study, nine centrality measures are used to classify the urban roads in four cities, evaluating the structural importance of street segments. A Stacked Denoising Autoencoder (SDAE) predicts each street's functionality, with logistic regression serving as the classifier. The proposed classifier differentiates between four classes adopted from the U.S. Department of Transportation (USDT): principal arterial road, minor arterial road, collector road, and local road. The SDAE-based model showed that regular grid configurations with repeated patterns are more influential in forming the functionality of road networks than those with less regularity in their spatial structure.
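
    The pipeline described above lends itself to a short sketch. The following is a minimal, hypothetical illustration assuming PyTorch: the 9-feature input and 4 output classes follow the abstract, but the hidden size, noise level, and training details are placeholders, not the authors' configuration.

```python
# Hypothetical sketch, not the authors' code: a denoising autoencoder over the
# nine centrality measures, with a logistic-regression head for the four
# USDT functional classes.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_features=9, n_hidden=5):  # hidden size is a guess
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.dec = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        noisy = x + 0.1 * torch.randn_like(x)  # corrupt input, reconstruct clean
        return self.dec(self.enc(noisy))

ae = DenoisingAE()
x = torch.randn(64, 9)                  # 64 street segments x 9 centrality measures
pretrain_loss = nn.MSELoss()(ae(x), x)  # denoising reconstruction objective

# Logistic regression on the learned representation: a linear layer whose
# logits feed softmax cross-entropy over the four functional classes.
head = nn.Linear(5, 4)
labels = torch.randint(0, 4, (64,))
clf_loss = nn.CrossEntropyLoss()(head(ae.enc(x)), labels)
```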

    Deep Learning-Based Single Image Super-Resolution: An Investigation for Dense Scene Reconstruction with UAS Photogrammetry

    The deep convolutional neural network (DCNN) has recently been applied to the highly challenging and ill-posed problem of single image super-resolution (SISR), which aims to predict high-resolution (HR) images from their corresponding low-resolution (LR) images. In many remote sensing (RS) applications, the spatial resolution of aerial or satellite imagery has a great impact on the accuracy and reliability of information extracted from the images. In this study, the potential of a DCNN-based SISR model, the enhanced super-resolution generative adversarial network (ESRGAN), to predict the spatial information degraded or lost in a hyper-spatial resolution unmanned aircraft system (UAS) RGB image set is investigated. The ESRGAN model is trained on a limited number of original HR images (50 of 450 total) and synthetically generated LR UAS images produced by downsampling the original HR images with a bicubic kernel at a ×4 factor. Quantitative and qualitative assessments of the super-resolved images using standard image quality measures (IQMs) confirm that the DCNN-based SISR approach can be successfully applied to LR UAS imagery for spatial resolution enhancement. The performance of the DCNN-based SISR approach on the UAS image set closely approximates performances reported on standard SISR image sets, with a mean peak signal-to-noise ratio (PSNR) of around 28 dB and a mean structural similarity (SSIM) index of around 0.85. Furthermore, by exploiting the rigorous Structure-from-Motion (SfM) photogrammetry procedure, an accurate task-based IQM for evaluating the quality of the super-resolved images is carried out. Results verify that the interior and exterior imaging geometry, which is extremely important for extracting highly accurate spatial information from UAS imagery in photogrammetric applications, can be accurately retrieved from a super-resolved image set. The numbers of corresponding keypoints and dense points generated by the SfM photogrammetry process are about 6 and 17 times greater, respectively, than those extracted from the corresponding LR image set.
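
    As a rough illustration of the evaluation protocol described above, the sketch below assumes Pillow and scikit-image; only the bicubic kernel and the ×4 factor come from the abstract, and the function names are invented for illustration.

```python
# Illustrative sketch, not the study's code: simulate x4 bicubic LR images and
# compute the two standard IQMs reported (PSNR in dB, SSIM unitless).
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def make_lr(hr: Image.Image, factor: int = 4) -> Image.Image:
    """Downsample an HR image with a bicubic kernel, as in the study."""
    w, h = hr.size
    return hr.resize((w // factor, h // factor), Image.BICUBIC)

def iqms(sr: np.ndarray, hr: np.ndarray) -> tuple[float, float]:
    """PSNR and SSIM between a super-resolved image and its HR reference."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    return psnr, ssim
```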

    Review and Evaluation of Deep Learning Architectures for Efficient Land Cover Mapping with UAS Hyper-Spatial Imagery: A Case Study Over a Wetland

    Deep learning has already proved to be a powerful state-of-the-art technique for many image understanding tasks in computer vision and other applications, including remote sensing (RS) image analysis. Unmanned aircraft systems (UASs) offer a viable and economical alternative to conventional sensors and platforms for acquiring data of high spatial and temporal resolution with high operational flexibility. Coastal wetlands are among the most challenging and complex ecosystems for land cover prediction and mapping because land cover targets often show high intra-class and low inter-class variances. In recent years, several deep convolutional neural network (CNN) architectures have been proposed for pixel-wise image labeling, commonly called semantic image segmentation. In this paper, some of the more recent deep CNN architectures proposed for semantic image segmentation are reviewed, and each model's training efficiency and classification performance are evaluated by training it on a limited labeled image set. Training samples are provided from hyper-spatial resolution UAS imagery over a wetland area, and the required ground-truth images are prepared by manual labeling. Experimental results demonstrate that deep CNNs have great potential for accurate land cover prediction from UAS hyper-spatial resolution images. Some simple deep learning architectures perform comparably to, or even better than, complex and very deep architectures while requiring remarkably fewer training epochs. This is especially valuable when limited training samples are available, which is a common situation in most RS applications.
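
    For readers unfamiliar with pixel-wise labeling, here is a deliberately tiny encoder-decoder sketch in PyTorch. It is a stand-in for the published architectures the paper reviews, not one of them; the class count and tile size are placeholders.

```python
# Toy semantic-segmentation network: per-pixel class logits from an RGB tile.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=6):  # n_classes is illustrative
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_classes, 1),  # 1x1 conv -> per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
x = torch.randn(1, 3, 256, 256)                 # one RGB UAS tile
target = torch.randint(0, 6, (1, 256, 256))     # manually labeled ground truth
loss = nn.CrossEntropyLoss()(model(x), target)  # per-pixel cross-entropy
```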

    FogNet: A multiscale 3D CNN with double-branch dense block and attention mechanism for fog prediction

    The reduction of visibility adversely affects land, marine, and air transportation, so the ability to skillfully predict fog would provide considerable utility. We predict fog visibility categories below 1600 m, 3200 m, and 6400 m by post-processing numerical weather prediction model output and satellite-based sea surface temperature (SST) with a 3D convolutional neural network (3D-CNN). The target is an airport located on a barrier island adjacent to a major US port; measured visibility from this airport serves as a proxy for fog that develops over the port. The features chosen to calibrate and test the model originate from the North American Mesoscale Forecast System, with the values of each feature organized on a 32 × 32 horizontal grid; the SSTs were obtained from the NASA Multiscale Ultra Resolution dataset. The input to the model is organized as a high-dimensional cube containing 288 to 384 layers of 2D horizontal fields of meteorological variables (predictor maps). In this 3D-CNN (hereafter, FogNet), two parallel feature-extraction branches are designed: one for spatially auto-correlated features (a spatial-wise dense block and attention module), and the other for correlations between input variables (a variable-wise dense block and attention mechanism). To extract features representing processes occurring at different scales, a 3D multiscale dilated convolution is used. Data from 2009 to 2017 (2018 to 2020) are used to calibrate (test) the model. FogNet performance results for 6-, 12-, and 24-h lead times are compared with results from the High-Resolution Ensemble Forecast (HREF) system; FogNet outperformed HREF across 8 standard evaluation metrics.
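
    The multiscale 3D dilated-convolution idea is easy to sketch. Assuming PyTorch, the module below runs parallel 3D convolutions with different dilation rates and concatenates them along channels; the channel counts, rates, and single-variable input cube are illustrative, not FogNet's actual configuration.

```python
# Illustrative multiscale dilated 3D convolution: each branch sees a different
# receptive field (scale); padding = dilation keeps spatial dims unchanged.
import torch
import torch.nn as nn

class MultiscaleDilated3D(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# Input shaped like the abstract's data cube: stacked 2D predictor maps on a
# 32 x 32 grid (288 layers here, per the stated 288-384 range).
x = torch.randn(1, 1, 288, 32, 32)
feats = MultiscaleDilated3D(1, 8)(x)   # -> shape (1, 24, 288, 32, 32)
```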

    A Deep Learning Based Method to Delineate the Wet/Dry Shoreline and Compute Its Elevation Using High-Resolution UAS Imagery

    Automatically detecting the wet/dry shoreline from remote sensing imagery has many benefits for beach management in coastal areas, enabling managers to take measures to protect wildlife during high-water events. This paper proposes a modified HED (Holistically-Nested Edge Detection) architecture to create a model that automatically identifies the wet/dry shoreline and computes its elevation from the associated DSM (Digital Surface Model). The model generalizes to several beaches in Texas and Florida. The data from the multiple beaches were collected using UAS (Uncrewed Aircraft Systems). UAS allow for the collection of high-resolution imagery and the creation of the DSMs that are essential for computing the elevations of the wet/dry shorelines. Another advantage of UAS is the flexibility to choose locations and metocean conditions, making it possible to collect the varied dataset necessary to calibrate a general model. To evaluate the performance and generalization of the AI model, we trained it on data from eight flights over four locations, tested it on data from a ninth flight, and repeated this for all possible combinations. The AP and F1 scores obtained show that the model's predictions succeed in the majority of cases, but the limitations of a purely computer-vision assessment are discussed in the context of this coastal application. The method was also assessed more directly by comparing the average elevations of the labeled and AI-predicted wet/dry shorelines. The absolute differences between the two elevations were, on average, 2.1 cm, while the absolute difference between the standard deviations of the elevations for each wet/dry shoreline was 2.2 cm. The proposed method results in a generalizable model able to delineate the wet/dry shoreline in beach imagery for multiple flights at several locations in Texas and Florida and for a range of metocean conditions.
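
    The elevation comparison reported above reduces to a simple computation. The sketch below, assuming NumPy with invented array shapes and values, samples the co-registered DSM at predicted shoreline pixels and summarizes their elevations, the statistics the abstract compares between labeled and AI-predicted shorelines.

```python
# Hypothetical sketch, not the authors' pipeline: mean and standard deviation
# of DSM elevations along a binary wet/dry shoreline mask.
import numpy as np

def shoreline_elevation(mask: np.ndarray, dsm: np.ndarray):
    """mask: boolean HxW shoreline pixels; dsm: HxW elevations in meters."""
    z = dsm[mask]
    return float(z.mean()), float(z.std())

# Made-up example: a horizontal shoreline across a gently varying DSM.
mask = np.zeros((100, 100), dtype=bool)
mask[50, 10:90] = True
dsm = np.random.normal(loc=0.5, scale=0.02, size=(100, 100))
mean_z, std_z = shoreline_elevation(mask, dsm)
```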